Approximation with Tensor Networks. Part II: Approximation Rates for Smoothness Classes
We study the approximation by tensor networks (TNs) of functions from
classical smoothness classes. The considered approximation tool combines a
tensorization of functions in $L^p$, which allows one to identify a
univariate function with a multivariate function (or tensor), and the use of
tree tensor networks (the tensor train format) for exploiting low-rank
structures of multivariate functions. The resulting tool can be interpreted as
a feed-forward neural network, with first layers implementing the
tensorization, interpreted as a particular featuring step, followed by a
sum-product network with sparse architecture. In part I of this work, we
presented several approximation classes associated with different measures of
complexity of tensor networks and studied their properties. In this work (part
II), we show how classical approximation tools, such as polynomials or splines
(with fixed or free knots), can be encoded as a tensor network with controlled
complexity. We use this to derive direct (Jackson) inequalities for the
approximation spaces of tensor networks. This is then utilized to show that
Besov spaces are continuously embedded into these approximation spaces. In
other words, we show that arbitrary Besov functions can be approximated with
optimal or near-optimal rate. We also show that an arbitrary function in the
approximation class possesses no Besov smoothness, unless one limits the depth
of the tensor network.
Comment: For part I see arXiv:2007.00118, for part III see arXiv:2101.1193
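The encodability claim for polynomials can be checked numerically: tensorizing a degree-n polynomial on a dyadic grid yields a tensor whose unfolding ranks are bounded by n + 1, independent of the depth. A minimal sketch under illustrative choices (not the authors' code; the cubic and the depth are assumptions):

```python
import numpy as np

# Tensorization: sample f on the 2^d dyadic points of [0, 1) and reshape
# the sample vector into a d-way tensor indexed by the binary digits of x.
d = 10
x = np.arange(2**d) / 2**d           # x = 0.b1 b2 ... bd in binary
f = x**3 - 0.5 * x                   # a cubic polynomial
T = f.reshape((2,) * d)              # tensor T[b1, ..., bd]

# Ranks of the tensor-train unfoldings: for a degree-n polynomial they
# are bounded by n + 1, independently of the depth d.
ranks = []
for k in range(1, d):
    M = T.reshape(2**k, 2**(d - k))  # k-th unfolding matrix
    ranks.append(np.linalg.matrix_rank(M, tol=1e-10))
print(ranks)                         # each rank is at most 4 for a cubic
```

The bounded ranks are exactly the controlled representation complexity that the Jackson-type inequalities in the paper rest on.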
Exploring Mobile Commerce Adoption Maturity: An Empirical Investigation
With the proliferation of mobile devices, studies on Mobile Commerce (MC) adoption have received increasing attention from researchers in information technology. While many studies in the literature have investigated MC adoption by individuals, they mainly examine the factors that lead to usage; they do not examine how individuals may progress or mature from basic use of mobile devices to more sophisticated usage. In this study, we develop an MC Adoption Maturity Model to show how individuals may mature in MC adoption. The model is examined using qualitative data collected from 10 individuals. The study enriches our understanding of technology adoption by individuals because it explains how existing users of a technology, such as mobile technology, advance in their MC usage.
Approximation with Tensor Networks. Part I: Approximation Spaces
We study the approximation of functions by tensor networks (TNs). We show
that Lebesgue $L^p$-spaces in one dimension can be identified with tensor
product spaces of arbitrary order through tensorization. We use this tensor
product structure to define subsets of $L^p$ of rank-structured functions of
finite representation complexity. These subsets are then used to define
different approximation classes of tensor networks, associated with different
measures of complexity. These approximation classes are shown to be
quasi-normed linear spaces. We study some elementary properties and
relationships of said spaces. In part II of this work, we will show that
classical smoothness (Besov) spaces are continuously embedded into these
approximation classes. We will also show that functions in these approximation
classes do not possess any Besov smoothness, unless one restricts the depth of
the tensor networks. The results of this work are both an analysis of the
approximation spaces of TNs and a study of the expressivity of a particular
type of neural networks (NN) -- namely feed-forward sum-product networks with
sparse architecture. The input variables of this network result from the
tensorization step, interpreted as a particular featuring step which can also
be implemented with a neural network with a specific architecture. We point out
interesting parallels to recent results on the expressivity of rectified linear
unit (ReLU) networks -- currently one of the most popular types of NN.
Comment: For part II see arXiv:2007.00128, for part III see arXiv:2101.1193
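The rank-based complexity measures behind these approximation classes can be illustrated numerically: after tensorization, a smooth function has rapidly decaying singular values in its unfoldings, while a rough one does not. A small sketch with illustrative choices (the test functions and tolerance are assumptions, not from the paper):

```python
import numpy as np

# Compare the singular-value decay of the middle unfolding of the
# tensorization of a smooth function versus random (rough) samples.
rng = np.random.default_rng(0)
d = 12
x = np.arange(2**d) / 2**d

def middle_unfolding_svals(v):
    # Reshape the length-2^d sample vector into its middle unfolding
    # and return normalized singular values.
    M = v.reshape(2**(d // 2), 2**(d - d // 2))
    s = np.linalg.svd(M, compute_uv=False)
    return s / s[0]

smooth = middle_unfolding_svals(np.exp(np.sin(2 * np.pi * x)))
rough = middle_unfolding_svals(rng.standard_normal(2**d))

# Effective ranks at tolerance 1e-8: small for the analytic function,
# full for the random one.
print((smooth > 1e-8).sum(), (rough > 1e-8).sum())
```

The gap between the two effective ranks is the phenomenon the rank-based approximation classes quantify.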
Approximation with Tensor Networks. Part III: Multivariate Approximation
We study the approximation of multivariate functions with tensor networks
(TNs). The main conclusion of this work is an answer to the following two
questions: "What are the approximation capabilities of TNs?" and "What is an
appropriate model class of functions that can be approximated with TNs?" To
answer the former: we show that TNs can (near-)optimally replicate
$h$-uniform and $h$-adaptive approximation, for any smoothness order of the
target function. Tensor networks thus exhibit universal expressivity w.r.t.
isotropic, anisotropic and mixed smoothness spaces that is comparable with more
general neural networks families such as deep rectified linear unit (ReLU)
networks. Put differently, TNs have the capacity to (near-)optimally
approximate many function classes -- without being adapted to the particular
class in question. To answer the latter: as a candidate model class we consider
approximation classes of TNs and show that these are (quasi-)Banach spaces,
that many types of classical smoothness spaces are continuously embedded into
said approximation classes and that TN approximation classes are themselves not
embedded in any classical smoothness space.
Comment: For part I see arXiv:2007.00118, for part II see arXiv:2007.0012
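The bivariate case gives the simplest picture of such low-rank multivariate approximation: truncating the SVD of a smooth function's sample matrix, the matrix analogue of a rank-r tensor network, drives the error down quickly. A brief sketch with an illustrative function (an assumption, not taken from the paper):

```python
import numpy as np

# Rank-r SVD truncation of a smooth bivariate function: the relative
# error decays rapidly with the rank, mirroring the near-optimal
# approximation rates that tensor networks achieve in higher dimensions.
n = 256
x = np.linspace(0.0, 1.0, n)
F = np.exp(-np.add.outer(x, x) ** 2)     # f(x, y) = exp(-(x + y)^2)

U, s, Vt = np.linalg.svd(F)
errs = []
for r in (1, 2, 4, 8):
    Fr = (U[:, :r] * s[:r]) @ Vt[:r]     # best rank-r approximation
    errs.append(np.linalg.norm(F - Fr) / np.linalg.norm(F))
print(errs)                              # monotonically decreasing
```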
A Performance Study of Variational Quantum Algorithms for Solving the Poisson Equation on a Quantum Computer
Recent advances in quantum computing and their increased availability have led
to a growing interest in possible applications. Among those is the solution of
partial differential equations (PDEs) for, e.g., material or flow simulation.
Currently, the most promising route to useful deployment of quantum processors
in the near term is via so-called hybrid variational quantum algorithms
(VQAs). Thus, variational methods for PDEs have been proposed as a candidate
for quantum advantage in the noisy intermediate scale quantum (NISQ) era. In
this work, we conduct an extensive study of utilizing VQAs on real quantum
devices to solve the simplest prototype of a PDE -- the Poisson equation.
Although results on noiseless simulators for small problem sizes may seem
deceptively promising, the performance on quantum computers is very poor. We
argue that direct resolution of PDEs via an amplitude encoding of the solution
is not a good use case within reach of today's quantum devices -- especially
when considering large system sizes and more complicated non-linear PDEs that
are required in order to be competitive with classical high-end solvers.
Comment: 19 pages, 18 figures
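The variational principle such algorithms discretize can be emulated classically: the solution of the linear system A u = f arising from the Poisson equation minimizes the energy E(u) = 1/2 u^T A u - f^T u, which is the kind of cost a VQA evaluates on an amplitude-encoded trial state. A purely classical sketch, not the paper's implementation (grid size, right-hand side, and optimizer are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# 1D Poisson problem -u'' = f with Dirichlet boundary conditions,
# discretized by finite differences on n = 2^4 interior points
# (the size an amplitude encoding on 4 qubits would represent).
n = 16
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)

# Variational formulation: A u = f  <=>  u minimizes E(u).
E = lambda u: 0.5 * u @ A @ u - f @ u
res = minimize(E, np.zeros(n), jac=lambda u: A @ u - f, method="CG")

u_direct = np.linalg.solve(A, f)
print(np.linalg.norm(res.x - u_direct))  # agrees with the direct solve
```

A VQA replaces the classical minimizer with a parameterized quantum circuit; the noise study in the abstract concerns that replacement, not the variational principle itself.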
On Modeling Heterogeneous Wireless Networks Using Non-Poisson Point Processes
Future wireless networks are required to support data rates 1000 times higher
than the current LTE standard. To meet this ever-increasing demand, it is
inevitable that future wireless networks will have to develop seamless
interconnection between multiple technologies. A manifestation of this idea is
the collaboration among different types of network tiers such as macro and
small cells, leading to the so-called heterogeneous networks (HetNets).
Researchers have used stochastic geometry to analyze such networks and
understand their real potential. Unsurprisingly, it has been revealed that
interference has a detrimental effect on performance, especially if not modeled
properly. Interference can be correlated in space and/or time, which has been
overlooked in the past. For instance, it is normally assumed that the nodes are
located completely independently of each other and follow a homogeneous Poisson
point process (PPP), which is not necessarily true in real networks since the
node locations are spatially dependent. In addition, the interference
correlation created by correlated stochastic processes has mostly been ignored.
To this end, we take a different approach to modeling the interference, using
non-Poisson point processes, and study the impact of spatial and temporal correlation
on the performance of HetNets. To illustrate the impact of correlation on
performance, we consider three case studies from real-life scenarios.
Specifically, we use massive multiple-input multiple-output (MIMO) to
understand the impact of spatial correlation; we use the random medium access
protocol to examine the temporal correlation; and we use cooperative relay
networks to illustrate the spatial-temporal correlation. We present several
numerical examples through which we demonstrate the impact of various
correlation types on the performance of HetNets.
Comment: Submitted to IEEE Communications Magazine
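The effect of spatial dependence on interference can be illustrated with a small simulation comparing a homogeneous PPP with a Matérn type-I hard-core (repulsive) process obtained by thinning it. All parameters below (intensity, path-loss exponent, hard-core radius, exclusion zone) are illustrative assumptions, not values from the article:

```python
import numpy as np

# Mean interference at the origin: homogeneous PPP versus the Matern
# type-I hard-core process obtained by deleting every point that has a
# neighbour closer than `radius`. Repulsion lowers interference, so the
# independence assumed by the PPP model can be pessimistic.
rng = np.random.default_rng(1)
lam, R, alpha, radius, trials = 1.0, 10.0, 4.0, 0.5, 200

def interference(pts):
    d = np.linalg.norm(pts, axis=1)
    d = d[d > 0.1]                       # exclusion zone at the receiver
    return np.sum(d ** -alpha)           # power-law path loss

I_ppp, I_hc = [], []
for _ in range(trials):
    n = rng.poisson(lam * (2 * R) ** 2)
    pts = rng.uniform(-R, R, size=(n, 2))
    I_ppp.append(interference(pts))
    # Matern type I thinning: pairwise distances, keep isolated points.
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    keep = dist.min(axis=1) >= radius
    I_hc.append(interference(pts[keep]))

print(np.mean(I_ppp), np.mean(I_hc))     # hard-core mean is smaller
```

Since thinning only removes transmitters, the hard-core interference is bounded by the PPP interference realization by realization, which is the qualitative point the case studies in the article quantify.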